What Could Possibly Go Right with Our AI Future? A Conversation With Reid Hoffman
Tech leader Reid Hoffman shares his insider’s perspective on an AI-powered future and its transformative potential to improve lives and create positive change.
The Malcolm and Carolyn Wiener Annual Lecture on Science and Technology addresses issues at the intersection of science, technology, and foreign policy. It has been endowed in perpetuity through a gift from CFR members Malcolm and Carolyn Wiener.
PAREKH: So welcome to today’s Council on Foreign Relations Malcolm and Carolyn Wiener Annual Lecture on Science and Technology with Reid Hoffman, cofounder of LinkedIn, partner at Greylock, coauthor of a new book which we will talk about today, Superagency, and a fellow CFR member. I should also note that you’re on the board of Microsoft and an early investor in OpenAI. So since we’re going to be talking about AI, everybody should know that.
HOFFMAN: Yes, cofounder of AI companies, and, and, and—
PAREKH: Yeah, and a lot. This lectureship explores the intersection of science, technology, and foreign policy, and is generously endowed in perpetuity by CFR members Malcolm and Carolyn Wiener. We’d like to extend a welcome to the members of the Wiener family joining us today, both in person and on Zoom.
And I’m Deven Parekh, managing director at Insight Partners, and a member of CFR’s Board of Directors. And I have the pleasure of presiding over today’s discussion.
So, Reid, thank you for joining us today.
HOFFMAN: Pleasure.
PAREKH: You had a busy week. You announced a new book. I think you announced a new company that’s going to cure AI.
HOFFMAN: Cure cancer. (Laughter.)
PAREKH: Oh, cure cancer. Use AI to cure cancer. (Laughter.) Cure AI. We might need that too.
HOFFMAN: (Laughs.) Yes.
PAREKH: But I want to start with, like, a really basic question, actually. I was talking to my twenty-five-year-old son this morning. He was noting how his brother, who is only three years younger than him, uses AI differently than he does. The fact that he started with AI as a freshman is different than, you know, not having done that. I know you use AI to write books. I didn’t get a personalized copy of your book, by the way. I only got the non-personalized—
HOFFMAN: We can fix that.
PAREKH: OK. But how do you integrate AI into your daily life? How is your daily life different today using AI than it was three years ago?
HOFFMAN: Oh, three years ago.
PAREKH: Or, say, a year ago.
HOFFMAN: OK. If you had something three years ago, it seems like the stone ages. So, well, part of—and, by the way, most people don’t realize how rich the capabilities are. So the book before Superagency (the one that came out on Tuesday) was Impromptu, the first book on AI cowritten with AI, to show as well as tell how it operates. And if I were to write that book today, it would be massively different, just given, you know, the two years of evolution in this. And part of it is, like, the o1 models, you know, which have gotten a lot more attention because of the DeepSeek stuff, and the ability to do kind of structured, detailed work—like, for this kind of audience, it’s, like, the AI program can do a lot of detailed thinking and a lot of chained reasoning.
And that might sound a little bit abstract, but it enables all kinds of really important use cases: writing, analysis, coding, other kinds of things. I’d say that probably the primary thing that’s evolved in the last year is not just, you know, kind of the ability to do these various creative projects, but one of the things that I think is going to happen within a small-n number of years for all professionals. People tend to say, hey, you’re working on these coding assistants because you’re going to make software engineers highly amplified. You know, AI is amplification intelligence. And that’s true. But what people aren’t quite realizing is we all essentially have a software engineering copilot on our PC, on our phone, that’s working with us in whatever our professional capacity is.
You know, venture capital investors, but also lawyers, doctors, educators, et cetera. Everyone’s going to be able to supercharge how they work—it could even be basic software engineering, in terms of it. And I think that’s already one of the things I’ve started doing. So, like, on my laptop I have, you know, kind of an orchestrator agent that, when I ask for certain kinds of tasks, filters them out to other agents, gets feedback, reassembles that information, does various kinds of things. I think that’s just a lens into the future.
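To make that pattern concrete, here is a minimal, hypothetical Python sketch of the kind of orchestrator setup Hoffman describes: one coordinating agent routes a task to specialist agents, gathers feedback, and reassembles the result. Every function and name here is invented for illustration; a real version would call an actual model API rather than these stubs.

```python
# Minimal sketch of an orchestrator agent (all names hypothetical).
# A real implementation would replace each stub with an LLM API call.

def research_agent(task: str) -> str:
    # Placeholder for an agent that gathers background material.
    return f"[research notes for: {task}]"

def drafting_agent(task: str, notes: str) -> str:
    # Placeholder for an agent that writes a first draft from the notes.
    return f"[draft of '{task}' based on {notes}]"

def critique_agent(draft: str) -> str:
    # Placeholder for an agent that gives feedback on the draft.
    return f"[feedback on {draft}]"

def orchestrator(task: str) -> str:
    """Route a task through specialist agents and reassemble the result."""
    notes = research_agent(task)          # farm the task out
    draft = drafting_agent(task, notes)   # produce a candidate output
    feedback = critique_agent(draft)      # get feedback, as Hoffman notes
    return f"{draft}\n(revised per {feedback})"  # reassemble

if __name__ == "__main__":
    print(orchestrator("summarize this week's AI policy news"))
```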
PAREKH: So if you were to take that forward a couple of years, and you’re, you know, on the board of some of these companies, you’re investing in these businesses, where do you see the biggest changes coming in the next couple of years with respect to how AI will get used?
HOFFMAN: So I think—I mean, I’ll first elaborate a little bit more on that, and then a couple of others. I think by the end of this year we may move to where, if you’re a software engineer and you’re not using multiple copilots in what you’re doing, you’re actually beginning to be, you know, under-tooled. And I think that won’t yet be true of the broader scope of professions, but it’s actually the thing that then leads to the amplification for law, medicine, you know, banking, analysis, et cetera. So I think that’s coming. Now, the other thing—part of the reason why I cofounded this company, Manas, with Siddhartha Mukherjee, targeting cancer—is I think there’s a lot of AI applications that are not simply pure language tasks, but a lot of other things that you’re already capable of doing.
Like, one of the discussions I was having recently was with someone who was saying, hey, we don’t know how to monitor progress on our construction sites. And today, someone with rudimentary knowledge of these AI things could set up a whole bunch of cameras. You could talk with an AI agent about what kinds of things you’re looking for in terms of progress. And it could just be taking these snapshots and analyzing, like, this is where you are in the construction project, and here’s the things that are happening, and here’s the things that are going fast, and here’s the things that are going slow.
PAREKH: I think we’d all use that. (Laughter.)
HOFFMAN: Yes. And it’s makeable today. With today’s technology, it’s not hard to wire it up.
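As a rough illustration of how simple that wiring could be, here is a hypothetical Python sketch of the construction-monitoring idea: periodic snapshots from site cameras, scored against a progress checklist by a vision-capable model. The camera and model calls below are invented stand-in stubs, not real APIs; a live version would swap in a camera SDK and a multimodal model call.

```python
# Sketch of the construction-monitoring idea (all stubs hypothetical).

CHECKLIST = ["foundation poured", "framing complete", "roofing started"]

def capture_snapshot(camera_id: str) -> bytes:
    # Stand-in for reading a frame from a site camera.
    return f"frame-from-{camera_id}".encode()

def analyze_progress(image: bytes, checklist: list[str]) -> dict[str, str]:
    # Stand-in for a multimodal model call that judges which
    # checklist items appear complete in the image.
    return {item: "unknown" for item in checklist}

def daily_report(cameras: list[str]) -> None:
    """One monitoring pass: snapshot each camera, print checklist status."""
    for cam in cameras:
        status = analyze_progress(capture_snapshot(cam), CHECKLIST)
        print(cam, status)

# In production this would run on a scheduler (e.g., hourly or daily).
daily_report(["north-gate-cam", "crane-cam"])
```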
PAREKH: Let’s talk a little bit about the book. I did have a chance to read it this weekend. Why did you choose the title here?
HOFFMAN: And you’re still here? (Laughter.)
PAREKH: Why did you choose the title, Superagency?
HOFFMAN: So when I look at the general discourse around AI, a lot of it is all of these negatives, fears, what could go wrong. And it ranges from, you know, what’s happening with my data, to am I going to be misinformed about key things—democracy, other things—to job transitions. And all of these things are human agency concerns. Like, am I losing key elements of my agency? And as I started studying kind of the history of these general technologies, which we cover in the book in extensive detail, you know, printing press, electricity, cars, et cetera, this is always the discourse around new technologies. Matter of fact, if you look back at the printing press, the, oh my gosh, it’s going to cause the collapse of human society because it’ll spread misinformation that will, you know, rob people of the ability to make good judgments, that’s the dialog around the printing press. And so this kind of thing, this worry about human agency, is very deep in our psychology as we get to new technologies.
And yet, when you look at all these technologies, they lead to massive increases in human agency. In particular, the pattern is not just the superpower that you get as an individual, but also, when all of the people around you start getting the same superpower, you get to superagency. Because, take a car as a simple parallel: not only does it change my geographic mobility, but, for example, now the doctor has a car, they can make a house call. You have a sick grandparent, you have a sick child, they can’t move. The doctor can come visit. And that kind of thing is a part of superagency. And so my thesis, the thesis of the book, is that AI will be giving us superagency.
PAREKH: Well, in the book you actually describe four AI worldviews—doomers, gloomers, zoomers, and bloomers. And, you know, you characterize yourself as a boomer.
HOFFMAN: Bloomer.
PAREKH: Bloomer. Sorry.
HOFFMAN: There are boomers too, but—(laughter)—
PAREKH: Can you—can you just kind of—can you kind of define how you think about that?
HOFFMAN: Yes. So one thing that I kind of—sometimes people say, well, you know, doomers, you’re calling names. Like, no, no, this is what they call themselves, which is, you know, Terminator movie, the robots are coming for us, we should do everything possible to stop AI. Gloomer is, well, look, lots of different companies, lots of different industries, lots of different countries are all going to be working to create AI. It’s inevitably going to come. But it’s not going to be very good. It’s going to be bad. We’ll have a lot of job loss. And blah, blah, blah. Zoomers are the, hey, technology is great.
PAREKH: Let it go.
HOFFMAN: Create as much as possible. Anything you can do with it, as fast as you can, really, really good. Bloomer is, like, saying, hey, look, the creation of this future technology is really good. Like, I have line of sight to a medical assistant on every smartphone that’s better than today’s average GP, that runs for under $5 an hour. And that’s not putting GPs out of work. That’s just—you know, lots of people do not have access to doctors. Do not have access to doctors at, you know, Friday at 11:00 p.m., unless, like, they’re lucky enough to have access to an emergency room, or something like that. And it’s, like, that’s line of sight. We should be accelerating that future. The suffering that is between now and when that’s deployed is on us for not having created it as fast as possible.
So the acceleration is good, but let’s be smart about risks. Let’s be smart about—you know, like, the risk that most worries me is the transition. We as human beings are very bad at new technologies. Like, say, the printing press. We cannot have anything of our society without it. We can’t have medicine, science, et cetera. And yet, when the printing press was introduced, we had nearly a century of religious war. So let’s navigate these transitions better, you know, learn from the past, but get there. And that’s part of the bloomer perspective, because it’s like, OK, what are the challenges in the transition? How do we navigate that as we are getting to this much more amazing future?
PAREKH: Doomers like to talk about AGI and what’s going to happen when we hit AGI. So maybe let’s start with—we’ll come back to the doomers—but how do you define AGI?
HOFFMAN: So AGI is a little bit of a Rorschach test, both in terms of is it a good thing, is it a bad thing, and what shape does it have? So everyone has kind of their own thing.
PAREKH: Probably depends on your worldview to begin with.
HOFFMAN: Yes. Very dependent. And, look, I think roughly what everyone’s gesturing at is that it’s a computer agent process that is capable of acting at a human level of competence—you know, and by the way, like top 90 percent of human level—across a wide variety of things. And it’s almost like you could say it could be a worker in itself. And that’s roughly what all the definitions are trying to get at, both positive and negative. Sometimes it kind of goes to, oh, it’s superintelligent. That’s a different question. And so, for me, I roughly look at it as: you have a coherent, broad set of tasks that the agent can set its own kind of fitness function and goals on, and perform and move the task from beginning to completion on its own.
PAREKH: Now you’re back in high school and you’re part of the debate club. Now you have to make the doomer case. What would it be?
HOFFMAN: Well, by the way, I talk to all these folks. And I think it’s good to raise the case. The broad doomer case is—and there’s many variations of this—that these devices will become much smarter than us. And as part of being much smarter than us, either they will be directed by human beings in circumstances hostile to other human beings, or they will say, hey, what do we need you, you know, kind of semi-stupid monkeys, around for? We’re going to do our own thing, and you’re kind of in our way. And so, you know, weirdly, I participate in a lot of these kind of science fiction conversations about, you know, what happens when it’s a superintelligence. You know, like, well, that’s kind of, you know, confusing in various ways.
And it’s not at all clear that we necessarily get to a superintelligence. But the argument is that its clock speed is so much faster than ours that it will be much smarter, and therefore much more capable. And what we have done, as the cognitive beings leading the world, is we’ve reordered the world for ourselves. So that’s the worry for the—
PAREKH: That’s the doomer case.
HOFFMAN: That’s the doomer case.
PAREKH: So it’s a good lead-in to kind of regulation, regulatory frameworks. You have a new administration that came in. They’ve tried to roll back some of the executive orders from the last administration. So I guess start with the U.S., and we’ll get to global. In the U.S., what do you think this administration is likely going to do on the regulatory front? Much different than in prior administrations, you have a lot of AI-focused tech folks as part of the administration. So how do you think about that?
HOFFMAN: So, first, I thought the Biden executive order was actually very well developed and handled. They brought a whole bunch of the leading AI development labs into it, pressed them on what are the key risks, what are the things they can do about it: having a security plan, doing red teaming, you know, monitoring things that are at a very high scale. I thought all that’s very good. I think a lot of those companies—you know, the vast majority—are going to continue to do those things anyway, because I think they’re good things to do. I think another thing that’s good, as we look from 2025 forward, is including a bunch of deeply competent, you know, kind of technologists in the loop, and kind of evaluating what you’re doing. That’s one of the very good things about what’s happening.
I think that part of the challenge that you have is that AI is fundamentally going to be developed by the commercial sector, for lots of different reasons, and for a lot of good reasons. And so you need people who are from that to be helping guide, understand, participate in what should the dialog around regulation be? And if your theory is those people should be, you know, completely excluded from that process because, oh, look, it’s like they’re coming from the industry that’s building it, the problem is nobody else actually really understands it. The academics don’t understand the scale of it, generally. They’re very smart, but they don’t understand the scale issues. And when those academics happen to go, oh, scale, oh my God, that seems really frightening, they’re doomers, they actually don’t understand how the actual scale of it really fully works, and what are the things to do in terms of navigating it.
And so I think that that’s—I don’t know how to predict it. We’re, what, nine days in, something like that. But—
PAREKH: We’ll go the other way. If you were in charge of AI regulation, what do you think the right construct is, at a broad level?
HOFFMAN: Ah. So, first, you say, what are the things that could go seriously wrong? Rogue states, terrorists, criminals—like, cyber criminals taking down the grid. OK, let’s make sure that we’re protected against those, because they are extreme high-impact. Two, we have a bunch of other questions around things, anything from, you know, kind of how information processing in these systems works, and other kinds of things—let’s monitor those. Let’s be in discussion about it. Let’s be seeing what things are actually, in fact, working. And, like, for example, the iterative deployment system that worked for cars I think was really good. And it didn’t mean no regulation.
PAREKH: I thought it was a great analogy in the book.
HOFFMAN: Yes. Yeah, exactly. Because eventually it was like, well, we think you should have seatbelts. Industry pushed back. It was, like, no, no, we’re going to push and make sure you get seatbelts. It’s the right thing. But be doing it by looking at what’s actually happening versus imagining, you know, all the million things that could possibly go wrong. And so that kind of monitoring and dialog. So, like, for example, having companies say, I have a safety plan, a security plan, these are the things I’m testing against. So that when you call them and say, hey, what are the bad cases you’re trying to prevent? And what have you done about that? And how are you learning about that? And what are you seeing? They should have answers to all those questions.
PAREKH: It’s interesting. You talked about the threat actors—terrorists, or—
HOFFMAN: Yeah.
PAREKH: When you can run an open-source model on a mobile phone, how do you actually even regulate that?
HOFFMAN: Well, it’s one of the reasons why I’ve been saying, look, we need to be cautious and evaluative about what we open source. And, look, I say this as a massive open source advocate. I was on the board of Mozilla for eleven years. We open sourced all kinds of software at LinkedIn. And part of it is, unfortunately, even starting with the name: open source software has the benefit that everyone can see the code, and you can analyze it in ways that let you look at the safety and improve the safety of open source software. What actually is being talked about is open model weights, which, you know, is a performative cognitive system, but you don’t really gain any analytic benefit from it being out there. You gain a utility benefit. But, by the way, part of that utility benefit is I can easily refactor it to say, you know, give me advice on how I could make anthrax, deliver it, you know, et cetera. And it’s, like, OK—(laughs)—let’s not have that running around, please. (Laughter.)
PAREKH: Well, so, on that—and we’ll get to China in a second—but, you know, if you look at data, and you look at Europe with GDPR, and how different that is from U.S. standards, and the EU is talking about an EU AI Act that’s much more restrictive than the U.S. approach. Do you think there’s actually an ability to harmonize these policies? And keep China out of it for one second. We’ll bring them in, in a second. But even amongst the Western world, can we harmonize?
HOFFMAN: Well, fundamentally, I think it’s very difficult. I’ve been trying to persuade various European leaders—and I’ll share with you the metaphor that I gave them, because it brings China in a little bit, but we’ll talk about China a lot more—to try to understand these things. I said, look—and I was using the word football in the European sense, you know, World Cup, right, not how the Americans think of it—if you think of this as a football game between the Chinese and the Americans, and what the EU is trying to be is the referee, you have a couple problems.
First, the referee never wins, right? (Laughter.) Two, people don’t like the referees very much. (Laughter.) And—
PAREKH: Hopefully that stuck. (Laughter.)
HOFFMAN: Well, that was part of the reason—it got repeated back to me, so I think it’s actually going around a bit. And one of the important things to say is, part of the way I describe what we’re in is: we’re in the cognitive industrial revolution. And that means that the industries, the societies that embrace this will have the same kind of derivative benefits as those societies and industries that embraced the industrial revolution. I think the parallel is deep in a lot of different ways. And so that’s the reason why, like, the game is afoot on this stuff. And so developing into the future is really important. And innovation is done by this kind of iterative development and deployment, by trying it, which is not what the EU fundamentally orients on.
So, for example, when they were passing GDPR—and I knew this kind of AI stuff was, you know, in development, how you do learning—it’s, like, you’re literally saying, don’t do any of this industry here. Like, we could be the consumers of it, but we’re not going to build any of it. That’s terrible for you, and terrible for the world. And so being development-forward, taking some risks as a function of that, and saying, hey, we know that some things will go wrong, because that’s what innovation and risk look like, but we’re going to do it anyway, is actually super important. So until you can harmonize on that principle, it’s very hard to harmonize on any of the rest.
PAREKH: Well, let’s talk about China for a second. It’s the one, maybe, area of bipartisan agreement in today’s polarized world. (Laughs.) In the last administration, obviously, you had the CHIPS Act. You had export controls to make it harder for them to get access to the chips. And, of course, last week we had the big announcement of DeepSeek. I think we probably both agree it wasn’t $5 million. But let’s just assume that it was a fraction of what maybe some others have spent. The question is, was it an unintended consequence of export controls to have China say, well, you know, necessity is the mother of invention, and it basically drove them to create this kind of product on a much more cost-effective basis?
HOFFMAN: Well, the first thing is, the thing about these export controls is they’re not preventing. They’re slowing down. Yes, that is also causing, let’s say, how do we be much more efficient in the things we do, which are things we can potentially also learn from in ways, which is great. I think the story around DeepSeek, part of the whole kind of media circus around this stuff, is radically incomplete in some important ways. Like, I think almost for sure they had some kind of access to large models; you couldn’t do the work they did without access to large models. And I think this is one of the things I already knew was coming, which is the developers of large models will be facilitating the development of a lot of smaller models that are effective. And which version of that they did, I don’t think we know yet. But there is that version.
The other thing is, their announced dollar amount is—people don’t realize that that makes sense, on a reasonable compute cluster, for the last training run, right? But when people say, hey, it cost $100 million or a billion dollars to do this, that wasn’t the last training run. That was all the iteration that came up to it.
PAREKH: Up to that point.
HOFFMAN: Right, in terms of making that happen. And so, you know, I think there continue to be massive advantages to scale compute. And if you talk to any of the Chinese entrepreneurs and investors, they are feeling, we want the scale compute. The scale compute, actually, in fact, is really important. That doesn’t mean you can’t follow behind the scale compute, which is essentially what is happening. And, you know, part of it is, I think, the important lessons here. When I was going around the last couple years saying, hey, this is an economic development race with the Chinese, I’d frequently get pushback from people saying, well, you’re saying this as an industrialist because you just want to allow yourself to continue to operate and invest at speed.
I was like, no, no, I’m saying this for society and our industry. And I think this demonstrates that, you know, very strongly. And I think the question is to realize a lot of this stuff. Like, for example, one interesting thing to look at is DeepSeek offered as a service from the corporation versus DeepSeek run as an independent model, because, like, you can see the Chinese censorship, and so forth. And the—
PAREKH: Yes. You can ask certain questions and you don’t get an answer.
HOFFMAN: Yes, right?
PAREKH: Tiananmen is an example.
HOFFMAN: Yes, right. And so it’s, like, look, this game is on. And which technologies do we want to be setting the standard for the world?
PAREKH: Do you think it ends up—if you look at the internet, you have kind of a Chinese internet and you have a Western internet. Do we end up with a Chinese AI and a Western AI that are totally separate?
HOFFMAN: Well, very likely. And actually, on the internet, for at least fifteen years I’ve been saying there are fundamentally three internets. There’s the English internet. There’s the Chinese internet. And there’s the all-else, right? And I think part of our mission is to make the English internet, in this case, the most highly functional and useful and humanity-elevating, and also to have the everything-else internet go, I want to be part of the English internet. And I think that’s the same parallel within AI.
PAREKH: So I’m going to ask two more questions, then we’ll open it up to the audience. Maybe just hitting on Monday’s announcement of the cancer company. Maybe you can put it in a broader context. Why do you believe that AI is almost creating a Moore’s Law in biotech, in the speed with which we’re going to be able to develop drugs?
HOFFMAN: Well, so—
PAREKH: And, obviously, you’re welcome to talk about your company in that context. (Laughs.)
HOFFMAN: No, no, that’s fine. I mean, I don’t want anyone thinking I’m shilling for anything. (Laughter.) But fundamentally, one of the things we’re talking about is that some of what’s happening with AI is a massive acceleration of the ability to do certain kinds of cognitive tasks. So how do we take that acceleration and apply it to drug discovery? And what are the ways to do that in a good way?
And so what Sid and I did is we looked through literally every step of the drug discovery process, from, you know, I might have an idea, or I think we should target, you know, triple negative breast cancer, to a drug, and said, OK, what are the places where the current tools and the future tools will make a 10X-plus acceleration in the number of candidates, the ability to evaluate them, and the ability to, you know, kind of test them, and then add that in and refactor the whole process, bringing the best of AI and the best of science into it? And that’s essentially what the company is doing. And it’s very early. We have only a small number of employees. We’re based here in New York.
PAREKH: We’ll see more of you.
HOFFMAN: Yes. (Laughs.) Exactly. And I think, A, it’s just the right thing to do, right? I mean, cancer threatens, you know, all of us: everyone from children on up, every age, every gender, every race, and so on. So I think it’s the right thing. But it’s also the kind of thing, when people are asking, like, well, why is it you’re so bullish about how AI can make a huge difference for humanity? It’s one of those instances. And I think there’s a number of them that aren’t just, hey, I have a chatbot that’s interesting to talk to and helps me with my work.
PAREKH: So my last question will be on social media. I’ll go away from AI for a second. The social media platforms, including LinkedIn, have a pretty big influence on public opinion. And over the last couple weeks, you’ve seen a pretty big change in how content moderation is being done. Facebook made a pretty big announcement that they’re moving to, effectively, a community notes model. What’s your perspective? And it doesn’t have to be Facebook-specific, but what’s the responsibility of the platforms? And what’s the best way to actually address that?
HOFFMAN: So, I mean, one thing I always start with in answering this question is to say, I’m pretty happy with LinkedIn. I think we’re doing a good job. (Laughter.) And, you know, we try to do what we think is the right thing, and are always taking feedback in order to improve.
Look, I think one of the things we need all of our media ecosystems to be—which I think absolutely includes social networks, and also includes cable news, talk radio, a bunch of other things—is a collective learning system, where our dialog and discussions with them progress us towards, you know, truths. And, in particular—and I’m not saying this example just because we happen to be in this week with RFK—a really good litmus test is: how is kind of antivax nonsense handled? Because there is a truth. Literally all scientific wisdom says vaccines are extremely helpful, have been amazing at eliminating key diseases, at saving many, many millions of lives, et cetera, and that even individuals who sometimes might have a bad side effect are much better off participating in a system where we are protected from these, you know, disastrous diseases.
And if your media ecosystem is saying, hey, you know, that antivax thing, or vax thing, that’s a political statement—you know, then you have a truth-learning problem, right? And we want to be, like, how do we get it so that they have truth improvement? And, by the way, truth improvement doesn’t mean policing everything said. Like, you can say, on a social network or any system, you know, hey, I think the Earth’s flat, or the moon landing is faked, or whatever. You just want it to be the, yeah, yeah, we understand that that’s crazy town.
PAREKH: OK, with that I want to open it up to audience questions. Just a reminder that we are on the record. And with that, we’ll start in the room. Just please state your name and affiliation.
Q: Thanks. Maryum Saifee, CFR member.
My question is on sort of the AI governance piece that was discussed. You know, you talked about the need for regulatory guardrails, but we don’t want to stifle innovation the way the EU has. Federally, you know—well, the U.S. is a decentralized democracy, so you have subnational actors, mayors and governors, crafting their own kinds of policies. California has its own version of a GDPR that’s different. So what are your thoughts on the federal government on regulation? And what guardrails could subnational actors experiment and play with on governance, so that we get to the bloomer stage, you know, on the balancing act?
HOFFMAN: So part of what I try to encourage all initial regulatory thinking to be is, you know: how do you ask the questions about what harms you’re trying to prevent? And then how do you create measurement for that? As opposed to the, well, I can imagine that there is this particular circumstance, and so therefore I want to set up a whole bunch of thou-shalt-nots and thou-shalt-onlys. That’s overly presuming any of our intelligence, including my own. So, to give you an example, it’s obviously very important that we don’t show terrorist acts on, you know, social media, media of any sort, in ways that encourage them or facilitate the degradation of society. So the tendency in regulation is to say, well, OK, you shall have a fifteen-minute delay, you know, or something like that, as kind of a way of doing it.
That’s actually not the right way to do this. The right way is to say: here’s what the measurement looks like, you have to be audited by your own auditors, and here’s what the penalty fee structure looks like. So if one instance is shown, well, it’s not so much, because, like, one doesn’t really affect society. When you get to 10,000 people having seen it, now we’re charging you this. When you get to 100,000, now we’re charging you this. And then let them figure out the innovation. Apply all of that innovation energy to go, OK, we don’t want those fees, we’re going to figure out how to do it. And that’s kind of a zone to think about: how do you get technology to be part of the solution to these ills, and the innovation that’s within the companies and labs to be on the side of what we’re trying to do? And I think that’s the better way generally to approach these things.
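As an illustration of the outcome-based approach Hoffman gestures at, here is a small, hypothetical Python sketch of a penalty schedule keyed to measured reach rather than to prescribed mechanisms. The tiers and dollar amounts are invented for illustration only.

```python
# Hypothetical sketch: penalties scale with audited reach,
# leaving the prevention mechanism up to the platform.

PENALTY_TIERS = [  # (minimum audited views, fee in dollars) -- invented numbers
    (100_000, 1_000_000),
    (10_000, 100_000),
    (1, 0),  # a single isolated view carries no fee
]

def penalty_for(audited_views: int) -> int:
    """Return the fee owed for a harmful item's measured reach."""
    for threshold, fee in PENALTY_TIERS:
        if audited_views >= threshold:
            return fee
    return 0

assert penalty_for(5) == 0            # one-off instance: no charge
assert penalty_for(25_000) == 100_000
assert penalty_for(250_000) == 1_000_000
```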
Q: Esther Dyson. Hi, Reid. I’m working on a book called Term Limits.
My question is kind of following up on that. It seems to me the real problem is predatory business models, because AI enables them to scale exponentially, blah, blah, blah. And I’d just love to hear you unpack that a little more.
HOFFMAN: Well, a lot of AI developers and companies will be very happy to hear that there’s business models. (Laughter.)
PAREKH: So will VCs. (Laughs.)
HOFFMAN: Yes. And so, look, I think part of the question is—look, AI is superpowers. And so when you add superpowers to bad things, that’s a challenge. That’s essentially what Esther is gesturing at. And that’s part of the reason why I was like, rogue states, terrorists, et cetera. Now, if you say, hey, I’m going to use AI to try to get more people to smoke cigarettes, you know, as an instance, like, OK, you can imagine that would be an instance of a very bad thing. And in terms of, kind of, regulation, I think, again, as opposed to the raw baseline general technology, you have to kind of look at what the application layer looks like. And so I think the dialog should be, in this case, around the application layer.
And then I think part of the dialog should be, like, well, where are the places where it’s critical that it be forbidden. And, by the way, some harm is not where it’s critical to forbid; it’s really bad harm at scale that is. And then there’s other areas. Part of the reason I was using tobacco and cigarettes is, like, well, you could say, well, you have to have these warning labels, or transparency and participation. That could be another. And so I’m still—I think there will be areas that we will need to navigate there. (Laughs.) But I wouldn’t slow down our development of AI because of that navigation. I think we have to discover it as we go.
PAREKH: We’ll go to a virtual question.
OPERATOR: We will take our next question from Marc Rotenberg.
Q: Hi. This is—this is Marc Rotenberg. We’re the Center for AI and Digital Policy. Nice to see you, Reid, and congratulations on the book.
I know you talked a lot about the analogy to the industrial age. And to be certain, industrialization brought a lot of innovation and progress and societal benefit. But at the same time, the disruption during that era was, of course, extraordinary, with challenges to public health, safety, labor conditions, extraordinary concentrations of wealth. In a recent book, Power and Progress, Daron Acemoglu argues that there’s nothing about innovation that necessarily tends toward broad social benefit. His analysis, going back to windmills, is that it tends toward concentration, in the absence of political institutions that are established to help protect the public interest. So, you know, respecting all of your great work on the innovation front, I wonder what your advice is to establish those political institutions that help ensure broad social benefit.
HOFFMAN: Thanks, Marc. Let’s see. A couple of things. So first is, we’ve always had various kinds of concentration of wealth, concentration of power, like you say. Well before the industrial revolution it was the nobility and the peasants. So it’s not—(laughs)—you know, it’s not that that is a new problem just brought about by the industrial revolution.
Now, that being said, I do think it’s a very good thing to say: how are we making sure that there is broad-based elevation of society and of humanity across the entire spectrum? And as one parallel, you know, one of the things that’s interesting: when technology, especially now, is developed for hundreds of millions and billions of people, it tends to be equivalently provisioned across everyone. So, like, the iPhone that Tim Cook uses is the same one that the Uber driver taking you away from here is using, right? And so if you look at ChatGPT and how these things are operating, it’s operating at that scale. That doesn’t mean that there’s not an issue here. It’s that there is also a broad elevation and social benefit in that, in the same way that there is with smartphones.
Now, that being said, as a general strategy for public institutions, I think part of the key thinking has to be: look, we know that we have to build this technology to benefit our citizens, our industry, our society. We know that if we don’t do it, other people will—DeepSeek, et cetera. And we want to make sure that this is, you know, built within kind of our ethical compass. And so you have to have a theory of how it’s built. And I don’t know of a good theory other than building it through the companies, right? I think it’s literally all, like, fiction, right, other than that. (Laughs.)
So then you say, well, what does that mean for governance? And the answer is, we need to be thinking very hard about how we do public-private partnerships. That’s part of the reason why the early regulatory question is, like, how do you measure things? And people forget about all these tools. Like, look, one of the things we did in corporate governance was we said, well, you have to have auditors, and the auditors have to validate stuff to the public. So it’s, like, well, what are the equivalents here? They’d say, hey, I want to know that you, AI developer people, are doing X, Y, and Z, and your auditors have to audit it. And maybe they don’t have to, like, publish it to the public, but it has to be accessible to government inquiry. Like, what does that look like? Those are the kinds of mechanisms and ways of doing these things.
And then the work becomes, you know, rather than simply bluntly going, I’m alarmed, going, well, what should those audit mechanisms look like? What’s the specific work? What’s that kind of thing? And that’s how you navigate to a positive future, in this case. And so that kind of thing is, I think, the beginning of a broad-brush answer. But, you know, obviously we’re—
PAREKH: It does feel like the rate of change is so high, for government to keep up is a real challenge.
HOFFMAN: Yes. And that’s—once again, how do you do private-public? As opposed to saying we’re just going to have the public institutions work as fast—no. We’ve developed these organizations called corporations, which are really, really good at taking risk, doing innovation, moving fast in a competitive environment. That’s part of the reason why we developed these organizations over centuries. Let’s harness that to an advantage, wherever it is an advantage. Like, the classic parallel is climate change: how do we get green energy much more in development through this mechanism, as a way of trying to address climate change?
PAREKH: Here in the front.
HOFFMAN: Speaking of—
Q: Hi. Lauren Racusin, Bloomberg Associates.
I was hoping you could speak a bit more to the limitations of the American labor supply. Using your analogy of the soccer team, the American soccer team, how do we strengthen the players, thinking about workforce development?
HOFFMAN: So, like, to the earlier transition point that Marc brought up: these transitions will be difficult. There is nothing that makes them not difficult. The question is, can we learn to make them more human and more graceful, and not necessarily graceful as the target, just more graceful? And so part of it is you say, well, a lot of humans are potentially going to get displaced by other humans using AI. How do we help those humans become those humans using the AI? So, like, AI can be part of the solution. It’s like, OK, how do you do, you know, kind of education? How do you do these different jobs? If you are getting displaced, how do you help find other work?
What I want is people building the AIs to help with all of that too. It’s part of the amplification intelligence. It’s part of the reason why, when I wrote Impromptu, I was taking all the areas where people were saying, this is the end of journalism, this is the end of education, and saying, well, here’s how you could use AI in order to help with it. And that’s a similar kind of thing here in trying to navigate this. Now, I truly am not trying to undercut that the transitions will be genuinely difficult. But let’s start working on them and navigating them. That’s part of the reason for superagency and these other articulations.
And now speaking of clean energy.
Q: On energy and climate? Yeah. Jason Bordoff; I lead the Center on Global Energy Policy at Columbia University.
So, not surprisingly, I had a question about energy. Because one of the many implications of AI seems to be that every week there’s another multibillion-dollar announcement about bringing nuclear power plants back online, or natural gas permitting reform to build infrastructure again. And then, to your point about DeepSeek: suddenly DeepSeek happens, and every energy stock price collapses because we suddenly think we don’t need all this energy, we can use much simpler chips to do all of this. So I was wondering if you could talk about compute power, and what lesson, if any, actually comes from what DeepSeek was able to do. Should it change the way we think about how energy-intensive this is going to be? And is all of this talk of energy-intensive, power-hungry AI overhyped, or not?
HOFFMAN: I think—so, one is, I think we will need a lot of energy and a lot of compute, because the fact that every single physical device could have intelligence added to it is actually a really good thing. Not just your PC, not just your phone, but also your speaker, your car, your light switch. (Laughs.) Right? Everything. And making that all intelligent follows the general rule around electricity, which is that there’s infinite demand at a certain price. So if you can get it delivered that way, then there’s infinite demand for potentially using it.
And I think that will be true for the use of compute, let alone the training of these systems. Now, one of the things that’s somewhat too often repeated is, like, oh, this AI, it’s got an exponential curve and it’s going to have this exponential energy powering it, and so it’s negative on climate change. There are lots of ways in which it’s actually, in fact, very positive. One, all the hyperscalers are committing to green datacenters. And part of doing that is they’re putting in billions of dollars of essentially venture purchase orders to geothermal, nuclear, all this clean energy, which then helps those industries go. And you’re scaling these industries for all of our electricity use, and they’re bearing the venture risk in doing that. And that’s a huge positive impact.
The next one is one of the funny artifacts of history: you know, Google, obviously great engineers, runs really efficient datacenters. And the internal dialog was, no, no, we’re already totally on top of this; AI is not going to be able to help us run the efficiency of our internal Google grid. They applied DeepMind. They had a 15 percent savings. Think of what you could do with that on broader grids, and all the rest. So even that is line of sight for using AI to make electricity and energy use much more efficient. Then, of course, you know, we have the science fiction projects, which, you know, might play out: hey, maybe we can have fusion work because we have AI managing the magnetic confinement. Or maybe, right? (Laughs.) But all of that is, I think, super positive.
Now, the other part of it is, I do think that, just like we have, call it, infinite use for electricity at a certain price, we also have infinite use for intelligence or compute at a certain price. And so steering into that future is the exact right thing for our industry to be doing. It’s the exact right thing for us as kind of American entrepreneurs and society to be doing. And, you know, one of the things, as I’ve been thinking about this over the last couple weeks and trying to, you know, articulate why us being bold is good: I don’t just hope AI is going to be amplification intelligence, but also perhaps American intelligence.
OPERATOR: We will take our next question from Wyatt Smith. Please remember to state your affiliation.
Q: Hey, Reid. I’m Wyatt Smith. I’m the CEO of UpSmith. We’re a Series A startup in Texas. Before that, though, I was on the team at Uber leading PD for Elevate.
And given your leadership in the industry, I’d love to get your read on the current state of the policy environment for urban air mobility, for distributed electric propulsion, for aircraft going through the sky.
HOFFMAN: So—Wyatt knows this—I am an investor, which is maybe why I should probably not talk at length on this question, and I was formerly on the board of Joby. And I do think air mobility is an extremely important thing to do. And, classically, part of the reason I like staying in the software industry and investing there is, like, all of this regulatory stuff makes things much more expensive and slow and difficult. And I think we want it, for various reasons—traffic congestion and a bunch of other things. As far as I can tell, it’s still kind of, what is it, trundling towards takeoff, like, trundling down the road.
PAREKH: There was that supersonic flight the other day, right?
HOFFMAN: Yes, Boom, which I’m also an investor in.
PAREKH: OK, there you go.
HOFFMAN: Right.
PAREKH: Just shilling for you. (Laughs.)
HOFFMAN: Yeah, thank you. Thank you. You can do it. (Laughter.) And so I think it’s progressing reasonably well, but obviously it’s one of the things that—like, I think in many of our industries, thinking about how to facilitate the right kinds of innovation and risk taking is one of the things that I think is important for us to do.
PAREKH: We’ll take—I was pointing over there before, so I want to make sure we get—
Q: Justin Vogt, Foreign Affairs.
I noticed in your bio that you invest in cryptocurrency. This morning, the New York Times published an interview with Bill Gates in which he said that cryptocurrency has no value. Why is he wrong?
HOFFMAN: So there’s a stack of reasons. (Laughter.) And, like, Bill and I talk about this. And if you want to see part of how deeply I respect and am amazed by Bill, go to the Possible podcast interview we did with him. A tour de force on just tons of interesting subjects; I learn from him all the time.
Now, here’s the simplest way of looking at it, rather than going into a lot of depth on this. Part of what I think the cryptocurrency stuff creates is a baseline safety net for countries whose financial governance system is broken or breaking, right? So don’t think U.S. Think Venezuela, right? (Laughs.) And, now, it may also eventually become a good kind of distributed internet way of doing identity validation, asset management, a bunch of other things—all of which the industry talks about a lot, and that future is, I think, very possible. But I think even that safety net matters: round up to 200 countries, you know, and many of those countries have broken financial governance systems. And I think it’s helpful in that vector. So that would be one. But I think there’s also a lot of innovation. And I tend to think in technology you always have to be thinking about where the puck is moving to versus where it is.
PAREKH: Go there in the back. Yeah.
Q: Thank you. Jonathan Guyer, writer and term member here.
Wondering if you could talk about how your thinking on AI for the military has evolved over time. I know you gave a lecture here about five years ago on this. You’re on the Defense Innovation Board. OpenAI has been moving into military work. And, you know, Microsoft is playing quite a role in Israel’s campaign in Gaza. How are you thinking about AI’s role in defense and national security?
HOFFMAN: So part of the discussion I frequently have in Silicon Valley—because there’s a thread in Silicon Valley that says, oh, you should have nothing to do with the national security establishment because, you know, we should be anti-war. And they don’t realize the Department of Defense is, actually, in fact, anti-war. Part of safety and security is to prevent the war from happening. And so that’s part of the reason why, you know, I’ve been honored to serve on the innovation board and have tried to bring things to it.
And, you know, when I did this the first time, with Ash Carter, a lot of the internal discussion was: this AI thing is coming. We need to have that very deeply baked into how we’re thinking about and doing everything from cyber to intelligence. And, of course, there’s complicated issues around autonomous weapons that, you know, we’ll need Geneva conventions around, and a stack of those. But I think, for all technologies that could be deployed, we want to figure out how to have them reinforce kind of a state of peace. And I think that’s actually the DOD mission. So that’s, in a very broad brush, what that is. And there’s a ton of questions and a ton of work to do there.
PAREKH: Right here in the front.
Q: Thank you very much. Joseph Gasparro, Royal Bank of Canada.
One of your competitors, Anthropic, was founded by an Italian American who was in Italy at the time, and they still chose to incorporate in the U.S. You’re talking to European leaders about not being a referee. There’s Draghi’s report on, you know, competitiveness. In the near term, do you think the EU becomes more accommodative for AI? Or do you think they’re still going to be restrictive, you know, medium to longer term, and the U.S. will continue to just, you know, control it? Thank you.
HOFFMAN: So, well, if my small influence has any effect, I hope they do enter into creating and innovating along the various kinds of paths I discuss with them. And I try to help every leader of every kind of Western-style democracy that I talk to on this, because I think it’s good to have multiple different participants. And I think that’s great to do. If you ask me to predict, I would say they will continue to tend toward, you know, let’s set up a committee to indicate what should be allowed. And that’s not very predictive of success.
PAREKH: Back there on the other side. Right there.
Q: Dhruv Singh, DEWS Holdings.
With the Sputnik moment here upon all of us, what do you think this administration needs to do to make sure the U.S. wins and that we’re the standard? (Laughter.)
HOFFMAN: So, look, I think the key thing is we have a lot of strong, capable organizations and companies doing this. And the question is, how do you enable them? Also, how do you shape it to have overall very positive social effects, and kind of prevent certain harms? But, like, for example, I actually think that the Stargate stuff is actually good to do. I think there’s areas where we are behind—like, onshore chip manufacturing, you know, is one of the important things to do. Getting the energy and datacenters, very important to do. Those things are more difficult for any one company to do on its own. So I think doing that with industry.
Now, part of what I tend to advise the governments around the world to do is say, look, work with companies in helping you. And that’s work with startups, work with scale-ups, and work with large companies. Work with all of them, because all of them bring different kinds of competencies to this. And try to enable that as a way of playing it forward.
PAREKH: Just on that: the CHIPS Act, last administration, $50 billion, and I don’t think it’s all been allocated. Microsoft announced this week $80 billion of capex this year. Can industrial policy work? Are there enough dollars for us to actually allocate to move the needle? Like, is that $50 billion really going to move the needle on kind of domestic chips, relative to Taiwan?
HOFFMAN: Well, I think it can. It’s not going to be quick. And part of it is, the strategy always for these things is, don’t try to—like, for example, when you think 5G and 6G, you don’t try to go, oh, let’s try to replicate all the things that are currently there. You go to the 7G as kind of a way of doing it. And that’s the general way, as you know, from our industry.
PAREKH: Right over there.
Q: Hi. Lauren Wagner with ARC Prize.
I think in 2025, nineteen AI bills are being proposed every day. I’m wondering what you think about this kind of influx. And given this time of technological change, is it time for a new paradigm for technology policymaking, and policymaking in general?
HOFFMAN: Well, so generally I think it would be extremely good to get more on the front foot and better on technology. Part of the baseline is, how do we have technologists embedded in government? Consulting with government is, I think, key. But I also think, and this is part of the reason why I’ve been gesturing at measurement: as opposed to a regulatory body that you have to go and do a whole bunch of approvals through on a clock, which generally speaking doesn’t work for most of the software and faster-moving technologies, it’s a question of how do you define your dashboards? How do you define what the measurement is? How do you say, hey, I want auditors to figure out, you know, this problem around, you know, kind of information flow within media networks, and I want a report, you know, on this kind of thing? That kind of approach is, I think, the general one. But, you know, every society, including ours, is decades behind on that.
PAREKH: Time for probably one final question. So we’ll go over here.
Q: Alex Wallace. IESE Business School.
A ton of time and focus and money has gone towards efficiency and productivity in AI. Are you optimistic that the same amount of time and money can be given to AI focused on larger global issues—cancer, which you’re doing, which is great, climate change, the decline in democracy, wealth inequality?
HOFFMAN: Well, one piece of good news is I don’t think the applications necessarily need the same amount of money as creating the general technological base. So, for example, with Manas, we’re not going to try to create a, you know, $10 billion computer to do it. We can leverage all of the other work being done. And leveraging that across all those vectors is part of the, hey, let’s make sure those connections are being made and happen. Will it be enough? If not, let’s push on it some. But that would be the vector.
Now, just as a fun comment—I was down at Wharton on Monday, and had a really great conversation with Ethan Mollick about this. One of the things I think is interesting: when you begin to think that all professional work is going to be done by people with multiple agents helping them, which is a near certainty in the future, then essentially the management of those agents is going to be one of the key skills. And how that goes into management skills, and how people learn, and what they do, is, I think, going to be an enormously interesting thing.
PAREKH: So, with that, I’d like to thank you all for coming. Thank you, Carolyn, for you and your family’s support. (Applause.) And, most importantly, thank Reid.
(END)
This is an uncorrected transcript.